210 research outputs found

    The FreeD - A Handheld Digital Milling Device for Craft and Fabrication

    We present an approach to combining digital fabrication and craft, focused on a new fabrication experience. The FreeD is a hand-held, digitally controlled milling device. It is guided and monitored by a computer while still preserving gestural freedom. The computer intervenes only when the milling bit approaches the 3D model, which was designed beforehand, either by slowing the spindle's speed or by drawing back the shaft. The rest of the time, the user has complete freedom to manipulate and shape the work in any creative way. We believe the FreeD will enable a designer to move between the rigid boundaries of established CAD systems and the free expression of handcraft.
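    The abstract's intervention logic (slow the spindle near the model, retract at the surface) can be illustrated as a simple control loop. This is a hedged sketch; the function names, the flat-surface distance stub, and the proportional slowdown are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a FreeD-style intervention loop.
# signed_distance_to_model is stubbed with a flat surface at z = 0;
# a real system would query the pre-designed 3D model.

def signed_distance_to_model(tip_position):
    """Distance from the milling bit tip to the model surface.
    Positive outside the model, negative once the bit would overcut."""
    return tip_position[2]

def control_step(tip_position, full_speed=10000.0, guard_band=2.0):
    """Return (spindle_rpm, retract_shaft) for one control cycle.

    Outside the guard band the user keeps full gestural freedom; inside
    it the spindle slows proportionally, and at the surface the shaft
    is drawn back.
    """
    d = signed_distance_to_model(tip_position)
    if d >= guard_band:
        return full_speed, False                      # no intervention
    if d <= 0.0:
        return 0.0, True                              # at/inside model: retract
    return full_speed * (d / guard_band), False       # slow down near surface
```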

    HCU400: An Annotated Dataset for Exploring Aural Phenomenology Through Causal Uncertainty

    The way we perceive a sound depends on many aspects: its ecological frequency, acoustic features, typicality, and most notably, its identified source. In this paper, we present the HCU400: a dataset of 402 sounds ranging from easily identifiable everyday sounds to intentionally obscured artificial ones. It aims to lower the barrier to the study of aural phenomenology as the largest available audio dataset to include an analysis of causal attribution. Each sample has been annotated with crowd-sourced descriptions, as well as familiarity, imageability, arousal, and valence ratings. We extend existing calculations of causal uncertainty, automating and generalizing them with word embeddings. Upon analysis, we find that individuals provide less polarized emotion ratings as a sound's source becomes increasingly ambiguous; individual ratings of familiarity and imageability, on the other hand, diverge as uncertainty increases, despite a clear negative trend on average.
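    The core idea behind a causal-uncertainty measure can be sketched as entropy over the sources listeners name for a sound: the more the attributions disagree, the more ambiguous the source. This is a minimal illustration in the spirit of the abstract; the paper's actual formulation, including the embedding-based merging of near-synonymous labels, may differ.

```python
# Illustrative causal-uncertainty measure: Shannon entropy (in bits)
# of the distribution of crowd-sourced source attributions.
# A single agreed-upon source yields 0 bits; maximal disagreement
# yields log2(number of distinct labels) bits.
import math
from collections import Counter

def causal_uncertainty(source_labels):
    counts = Counter(source_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```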

    HALO: Wearable Lighting

    What if lighting were not fixed to our architecture but became part of our body? Light would be only where it is needed. Buildings would light up brightly when busy and dim down when people leave. Lighting would become more energy efficient, more personal, and more colorful, tailored to individual needs. What applications beyond illumination would be possible in such a scenario? Halo is a wearable lighting device that aims to investigate this question. More specifically, Halo explores the potential of body-centered lighting technology to alter appearance and create a personal space for its wearer. A ring of colored LEDs frames the wearer's face, putting her into the light she desires. Borrowing from both theatrical and photographic lighting design, Halo offers several lighting compositions that make the wearer appear happy, sad, energetic, mysterious, etc. Using a smartphone application, the wearer can switch between these modes or let the application adjust automatically depending on her activities. Halo is an experimental technology that combines function and fashion: a platform to probe the future of wearable lighting.

    WristFlex: low-power gesture input with wrist-worn pressure sensors

    In this paper we present WristFlex, an always-available, on-body gestural interface. Using an array of force-sensitive resistors (FSRs) worn around the wrist, the interface can distinguish subtle finger-pinch gestures with high accuracy (>80%) and speed. The system is trained to classify gestures from subtle tendon movements at the wrist. We demonstrate that WristFlex is a complete system that works wirelessly in real time. The system is simple and lightweight in terms of power consumption and computational overhead. WristFlex's sensor power consumption is 60.7 µW, allowing the prototype to potentially last more than a week on a small lithium-polymer battery. WristFlex is also small and unobtrusive, and can be integrated into a wristwatch or a bracelet. We perform user studies to evaluate accuracy, speed, and repeatability, and demonstrate that the number of gestures can be extended with orientation data from an accelerometer. We conclude by showing example applications.
    National Science Foundation (U.S.) (NSF award 1256082)
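    The classification step described above can be sketched as matching a normalized FSR pressure vector against per-gesture templates. The nearest-centroid approach and all names here are illustrative assumptions for a lightweight classifier; the paper's actual classifier is not specified in this abstract.

```python
# Hypothetical nearest-centroid classifier for wrist-worn FSR readings.
# Normalizing each vector makes the match insensitive to overall
# strap tightness / pressure magnitude.
import math

def normalize(v):
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def classify(sample, templates):
    """templates: dict mapping gesture name -> trained mean FSR vector."""
    s = normalize(sample)

    def dist(template):
        t = normalize(template)
        return sum((a - b) ** 2 for a, b in zip(s, t))

    return min(templates, key=lambda name: dist(templates[name]))
```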

    Space Webs as Infrastructure for Crawling Sensors on Low Gravity Bodies

    This paper presents a mission concept in which a rope or a net is used to grapple onto a low-gravity body of interest. The net doubles as infrastructure for a network of tiny crawlers that move across its surface, primarily for applications in in-situ distributed sensing. As an initial application area, we consider deploying a network of distributed spectrometers across the surface of an asteroid for high-spatial-resolution material characterization. We present a first prototype of a rope-crawling mechanism as well as a study of a new-to-market chip-sized spectrometer as a candidate sensing payload for the crawlers. Some evidence is found for the sensor's ability to discriminate between high-iron and low-iron meteorite samples.

    A mobile interactive robot for gathering structured social video

    Documentaries are typically captured in a very structured way, using teams to film and interview people. We developed an autonomous method for capturing structured cinéma vérité style documentaries through an interactive robotic camera, used as a mobile physical agent to facilitate interaction and story gathering within a ubiquitous media framework. We sent this robot out to autonomously gather human narrative about its environment. The robot had a specific story-capture goal and leveraged humans to attain it. It collected a first-person view of stories unfolding in real life, and because it engaged with its subjects via a preset dialog, the resulting media clips were intrinsically structured. We evaluated this agent by classifying interactions as "complete" or "incomplete": "complete" interactions were those that generated viable and interesting videos, which could be edited together into a larger narrative. We found that 30% of the captured interactions were "complete". Our results suggested that changes to the system would produce only incrementally more "complete" interactions, as external factors such as natural bias or the busyness of the user come into play. The users who encountered the robot were fairly polar: either they wanted to interact or they did not, and very few partial interactions lasted more than one minute. Users who only partially interacted with the robot were found to treat it more roughly than those who completed the full interaction. We also determined that this type of limited-interaction system is best suited to short-term encounters. At the end of the study, a short cinéma vérité documentary showcasing the people and activity in our building was easily produced from the structured videos that were captured, indicating the utility of this approach.
    Massachusetts Institute of Technology. Media Laboratory

    Compact, configurable inertial gesture recognition


    ChainMail: A configurable multimodal lining to enable sensate surfaces and interactive objects

    The ChainMail system is a scalable electronic sensate skin designed as a dense sensor network. ChainMail is built from small (1" x 1") rigid circuit boards attached to their neighbors with flexible interconnects that allow the skin to be conformally arranged and manipulated. Each board contains an embedded processor together with a suite of thirteen sensors, providing dense, multimodal capture of proximate and contact phenomena. This system forms a sensate lining that can be applied to an object, device, or surface to enable interactivity. Under extended testing, we demonstrate that the flexible skin detects and responds to a variety of stimuli while running quickly and efficiently.
    National Science Foundation (U.S.) (Graduate Research Fellowship number 2007050798)

    Ubicorder: A mobile device for situated interactions with sensor networks

    The Ubicorder is a mobile, location- and orientation-aware device for browsing and interacting with real-time sensor network data. In addition to browsing data, the Ubicorder provides a graphical user interface (GUI) for defining inference rules. These inference rules detect sensor data patterns and translate them into higher-order events. Rules can also be recursively combined to form an expressive and robust vocabulary for detecting real-world phenomena, enabling users to script higher-level responses to distributed sensor stimuli. The Ubicorder's mobile, handheld form factor lets users bring the device to the phenomena of interest, so they can observe or trigger real-world stimuli while manipulating the event-detection rules in situ through its graphical interface. In a first-use study, participants without any prior sensor network experience rated the Ubicorder highly for its usefulness and usability when interacting with a sensor network.
    Things That Think Consortium
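    The recursive rule composition described above can be sketched as rules that map sensor readings to boolean events, with combinators that build higher-order events from sub-rules. All names and the reading format here are illustrative assumptions, not the Ubicorder's actual rule language.

```python
# Hedged sketch of recursively composable inference rules: a rule is a
# function from a dict of sensor readings to True/False, and rules can
# be combined into higher-order rules.

def threshold_rule(sensor, limit):
    """Fires when a named sensor reading exceeds a limit."""
    return lambda readings: readings.get(sensor, 0) > limit

def all_of(*rules):
    """Higher-order event: fires only when every sub-rule fires."""
    return lambda readings: all(rule(readings) for rule in rules)

def any_of(*rules):
    """Higher-order event: fires when at least one sub-rule fires."""
    return lambda readings: any(rule(readings) for rule in rules)

# Example: "room occupied" = motion detected AND sound above ambient.
occupied = all_of(threshold_rule("motion", 0), threshold_rule("sound_db", 40))
```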